13 research outputs found

    Digital forensic analysis methodology for private browsing: Firefox and Chrome on Linux as a case study

    The web browser has become one of the basic tools of everyday life, a tool that is increasingly used to manage personal information. This has led browsers to introduce new privacy options, including private mode. In this paper, a methodology to explore the effectiveness of the private mode included in most browsers is proposed. A browsing session was designed and conducted in Mozilla Firefox and Google Chrome running on four different Linux environments. After analyzing the information written to disk and the information available in memory, it can be observed that Firefox and Chrome did not store any browsing-related information on the hard disk. However, memory analysis reveals that a large amount of information could be retrieved in some of the environments tested. For example, when the browsers were executed in a VMware virtual machine, it was possible to retrieve most of the actions performed, from the keywords entered in a search field to the username and password entered to log in to a website, even after restarting the computer. In contrast, when Firefox was run on a slightly hardened non-virtualized Linux, it was not possible to retrieve any browsing-related artifacts after the browser was closed.
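The in-memory artifacts described above are typically recovered by carving printable strings out of a raw memory image and searching them for browsing-related patterns. A minimal sketch of that step (the manual equivalent of `strings dump | grep pattern`; the function name and sample data are illustrative, not from the paper):

```python
import re

def carve_strings(dump, pattern, min_len=6):
    """Carve printable-ASCII runs of at least min_len bytes from a raw
    memory image and keep those matching a search pattern."""
    hits = []
    for run in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, dump):
        text = run.group().decode("ascii")
        if re.search(pattern, text):
            hits.append(text)
    return hits

# Toy "memory image" with two embedded artifacts
dump = b"\x00\x00user=alice&pass=hunter2\x01\x02q=private+mode\x00"
credentials = carve_strings(dump, r"user=")
```

Real memory images are read from a dump file or acquisition tool; the bytes literal here only stands in for that input.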

    Assessment, Design and Implementation of a Private Cloud for MapReduce Applications

    Scientific computation and data-intensive analyses are ever more frequent. On the one hand, the MapReduce programming model has gained a lot of attention for its applicability to large parallel data analyses and Big Data applications. On the other hand, Cloud computing seems increasingly attractive for solving computing problems that demand a lot of resources. This paper explores the potential symbiosis between MapReduce and Cloud Computing, in order to create a robust and scalable environment to execute MapReduce workflows regardless of the underlying infrastructure. The main goal of this work is to provide an easy-to-install interface, so that non-expert scientists can deploy a suitable testbed for their MapReduce experiments on the local resources of their institution. Test cases were run to evaluate the time required for the whole execution process on a real cluster.
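The MapReduce model referred to above can be illustrated with a minimal in-process sketch of its three phases (map, shuffle, reduce), using the classic word-count example; this is a toy of the programming model, not the paper's cloud deployment:

```python
from collections import defaultdict
from itertools import chain

def map_phase(documents, mapper):
    # Apply the user-supplied mapper to each input, yielding (key, value) pairs
    return chain.from_iterable(mapper(doc) for doc in documents)

def shuffle(pairs):
    # Group all values by key, as the framework does between map and reduce
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    # Apply the user-supplied reducer to each key's value list
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count: each word maps to (word, 1); the reducer sums the ones
docs = ["map reduce map", "reduce cloud"]
counts = reduce_phase(
    shuffle(map_phase(docs, lambda d: [(w, 1) for w in d.split()])),
    lambda k, vs: sum(vs),
)
```

In a real MapReduce cluster the shuffle is distributed across nodes; here it is a single dictionary, which is enough to show the data flow.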

    Using an extended Roofline Model to understand data and thread affinities on NUMA systems

    Today’s microprocessors include multicores that feature a diverse set of compute cores and onboard memory subsystems connected by complex communication networks and protocols. Analyzing the factors that affect performance in such complex systems is far from an easy task. It is clear, however, that increasing data locality and affinity is one of the main challenges in reducing data access latency. As the number of cores increases, the influence of this issue on the performance of parallel codes becomes more and more important. Therefore, models to characterize the performance of such systems are in broad demand. This paper shows the use of an extension of the well-known Roofline Model adapted to the main features of the memory hierarchy present in most current multicore systems. The Roofline Model was also extended to show the dynamic evolution of the execution of a given code. To reduce the overhead of gathering the information needed to obtain this dynamic Roofline Model, hardware counters present in most current microprocessors are used. To illustrate its use, two simple parallel vector operations, SAXPY and SDOT, were considered. Different access strides and initial locations of vectors in memory modules were used to show the influence of different scenarios in terms of locality and affinity. The effect of thread migration was also considered. We conclude that the proposed Roofline Model is a useful tool to understand and characterise the behaviour of the execution of parallel codes in multicore systems. This work has been partially supported by the Ministry of Education and Science of Spain, FEDER funds under contract TIN 2010-17541, and Xunta de Galicia, EM2013/041. It has been developed in the framework of the European network HiPEAC and the Spanish network CAPAP-H.
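The basic (static) Roofline Model underlying this extension bounds attainable performance by the lesser of peak compute throughput and peak memory bandwidth times arithmetic intensity. A minimal sketch, with the SAXPY and SDOT kernels mentioned above as examples (the machine numbers in the test are hypothetical):

```python
def roofline(ai, peak_gflops, peak_bw_gbs):
    """Attainable GFLOP/s under the basic Roofline Model:
    min(compute roof, memory roof = bandwidth * arithmetic intensity)."""
    return min(peak_gflops, peak_bw_gbs * ai)

# Arithmetic intensity (flops per byte moved), single precision:
# SAXPY (y = a*x + y): 2 flops per element, 12 bytes (read x, read y, write y)
ai_saxpy = 2 / 12
# SDOT (sum of x[i]*y[i]): 2 flops per element, 8 bytes (read x, read y)
ai_sdot = 2 / 8
```

Both kernels are memory-bound on typical machines, which is why locality and affinity, rather than raw compute, dominate their performance on NUMA systems.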

    Automatic Extraction of Road Points from Airborne LiDAR Based on Bidirectional Skewness Balancing

    Road extraction from Light Detection and Ranging (LiDAR) data has become a hot topic in recent years. Nevertheless, it is still challenging to perform this task in a fully automatic way. Experiments are often carried out over small datasets with a focus on urban areas, and it is unclear how these methods perform in less urbanized sites. Furthermore, some methods require the manual input of critical parameters, such as an intensity threshold. Aiming to address these issues, this paper proposes a method for the automatic extraction of road points suitable for different landscapes. Road points are identified using pipeline filtering based on a set of constraints defined on the intensity, curvature, local density, and area. We focus especially on the intensity constraint, as it is the key factor to distinguish between road and ground points. The optimal intensity threshold is established automatically by an improved version of the skewness balancing algorithm. Evaluation was conducted on ten study sites with different degrees of urbanization. Road points were successfully extracted in all of them with an overall completeness of 93%, a correctness of 83%, and a quality of 78%. These results are competitive with the state of the art. This work has received financial support from the Consellería de Cultura, Educación e Ordenación Universitaria (accreditation 2019-2022 ED431G-2019/04 and reference competitive group 2019-2021, ED431C 2018/19) and the European Regional Development Fund (ERDF), which acknowledges the CiTIUS-Research Center in Intelligent Technologies of the University of Santiago de Compostela as a Research Center of the Galician University System. This work was also supported in part by Babcock International Group PLC (Civil UAVs Initiative Fund of Xunta de Galicia) and the Ministry of Education, Culture and Sport, Government of Spain (Grant Number TIN2016-76373-P).
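The classic skewness balancing idea behind the automatic intensity threshold is simple: repeatedly discard the highest values until the skewness of the remaining sample is no longer positive; the largest surviving value is the threshold. A minimal one-directional sketch (the paper's bidirectional improvement is not reproduced here):

```python
import statistics

def skewness(values):
    """Population skewness: E[(v - mean)^3] / stddev^3."""
    n = len(values)
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return 0.0
    return sum((v - mean) ** 3 for v in values) / (n * sd ** 3)

def skewness_balancing_threshold(intensities):
    """Drop the highest intensities until skewness <= 0;
    the largest remaining intensity is the threshold."""
    vals = sorted(intensities)
    while len(vals) > 2 and skewness(vals) > 0:
        vals.pop()  # remove the current maximum
    return vals[-1]
```

The intuition is that ground/road intensities form a roughly symmetric bulk, while bright off-road returns skew the distribution to the right; removing them restores symmetry.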

    A fast and optimal pathfinder using airborne LiDAR data

    Determining the optimal path between two points in a 3D point cloud is a problem that has been addressed in many different situations: from road planning and escape-route determination to network routing and facility layout. This problem is addressed using different kinds of input information, 3D point clouds being one of the most valuable. Its main utility is to save costs, whatever the field of application. In this paper, we present a fast algorithm to determine the least-cost path in an Airborne Laser Scanning point cloud. In some situations, such as finding escape routes, computing the solution in a very short time is crucial, and few works have been developed on this topic. State-of-the-art methods are mainly based on a digital terrain model (DTM) for calculating these routes, and these methods do not reflect the topography along the edges of the graph well. Moreover, the use of a DTM leads to a significant loss of both information and precision when calculating the characteristics of possible routes between two points. In this paper, a new method that does not require the use of a DTM and is suitable for airborne point clouds, whether they are classified or not, is proposed. The problem is modeled by defining a graph using the information given by a segmentation and a Voronoi tessellation of the point cloud. Performance tests show that the algorithm is able to compute the optimal path between two points by processing up to 678,820 points per second in a point cloud of 40,000,000 points covering 16 km². This work has received financial support from the Consellería de Cultura, Educación e Ordenación Universitaria (accreditation 2019-2022 ED431G-2019/04, reference competitive group 2019-2021, ED431C 2018/19) and the European Regional Development Fund (ERDF), which acknowledges the CiTIUS-Research Center in Intelligent Technologies of the University of Santiago de Compostela as a Research Center of the Galician University System. This work was also supported by the Ministry of Economy and Competitiveness, Government of Spain (Grant No. PID2019-104834GB-I00). We also acknowledge the Centro de Supercomputación de Galicia (CESGA) for the use of their computers.
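Once the point cloud has been turned into a weighted graph, the least-cost path itself is a classic shortest-path computation. A self-contained Dijkstra sketch on a toy graph (the paper builds its graph from a segmentation and a Voronoi tessellation; the adjacency-list representation below is only illustrative):

```python
import heapq

def dijkstra(graph, start, goal):
    """Least-cost path on a weighted graph given as
    {node: [(neighbour, edge_cost), ...]}. Returns (path, total_cost)."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # Walk predecessors back from the goal to reconstruct the path
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```

In the paper's setting, edge costs would encode the terrain characteristics extracted from the point cloud rather than the constants used here.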

    TrustE-VC: Trustworthy Evaluation Framework for Industrial Connected Vehicles in the Cloud

    The integration between cloud computing and vehicular ad hoc networks, namely, vehicular clouds (VCs), has become a significant research area. This integration was proposed to accelerate the adoption of intelligent transportation systems. VCs are expected to carry more computing capabilities that manage large-scale collected data, a trend that requires a security evaluation framework ensuring data privacy protection, integrity of information, and availability of resources. To the best of our knowledge, this is the first study that proposes a robust trustworthiness evaluation of the vehicular cloud for security criteria evaluation and selection. This article proposes three levels of security features in order to develop effectiveness and trustworthiness in VCs. To assess and evaluate these security features, our evaluation framework consists of three main interconnected components: 1) an aggregation of the security evaluation values of the security criteria for each level; 2) a fuzzy multicriteria decision-making algorithm; and 3) a simple additive weight associated with the importance-performance analysis and performance rate to visualize the framework findings. The evaluation results of the security criteria, based on the average performance rate and global weight, suggest that data residency, data privacy, and data ownership are the most pressing challenges in assessing data protection in a VC environment. Overall, this article paves the way for a secure VC using an evaluation of effective security features and underscores directions and challenges facing the VC community. This article sheds light on the importance of security by design, emphasizing multiple layers of security when implementing industrial VCs. This work was supported in part by the Ministry of Education, Culture, and Sport, Government of Spain under Grant TIN2016-76373-P, in part by the Xunta de Galicia Accreditation 2016–2019 under Grant ED431G/08 and Grant ED431C 2018/2019, and in part by the European Union under the European Regional Development Fund.
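The simple additive weighting (SAW) component mentioned above is a standard multicriteria technique: normalise each criterion column, then rank alternatives by their weighted sums. A generic sketch (the criteria, weights, and values below are illustrative, not the paper's):

```python
def saw_scores(matrix, weights, benefit):
    """Simple Additive Weighting. matrix[i][j] is alternative i's raw
    value on criterion j; benefit[j] is True when higher is better.
    Returns one aggregate score per alternative."""
    cols = list(zip(*matrix))
    norm_cols = []
    for j, col in enumerate(cols):
        if benefit[j]:                  # benefit criterion: v / max
            norm_cols.append([v / max(col) for v in col])
        else:                           # cost criterion: min / v
            norm_cols.append([min(col) / v for v in col])
    rows = list(zip(*norm_cols))
    return [sum(w * v for w, v in zip(weights, row)) for row in rows]
```

In the paper, SAW is combined with fuzzy multicriteria decision making and importance-performance analysis; this sketch covers only the additive aggregation step.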

    Fast Ground Filtering of Airborne LiDAR Data Based on Iterative Scan-Line Spline Interpolation

    Over the last two decades, a wide range of applications have been developed from Light Detection and Ranging (LiDAR) point clouds. Most LiDAR-derived products require the distinction between ground and non-ground points. Because of this, ground filtering is one of the most studied topics in the literature, and robust methods are available nowadays. However, these methods have been designed to work with offline data and are generally not well suited for real-time scenarios. Aiming to address this issue, this paper proposes an efficient method for ground filtering of airborne LiDAR data based on scan-line processing. In our proposal, an iterative 1-D spline interpolation is performed in each scan line sequentially. The final spline knots of a scan line are taken into account for the next scan line, so that valuable 2-D information is also considered without compromising computational efficiency. Points are labelled as ground or non-ground by analysing their residuals with respect to the final spline. When tested against synthetic ground truth, the method yields a mean kappa value of 88.59% and a mean total error of 0.50%. Experiments with real data also show satisfactory results under visual inspection. Performance tests on a workstation show that the method can process up to 1 million points per second. The original implementation was ported to a low-cost development board to demonstrate its feasibility for embedded systems, where throughput was improved by using programmable-logic hardware acceleration. Analysis shows that real-time filtering is possible on a high-end board prototype, as it can process, with low energy consumption, the number of points per second that current lightweight scanners acquire. This work was supported by the Ministry of Education, Culture, and Sport, Government of Spain (Grant Number TIN2016-76373-P), the Consellería de Cultura, Educación e Ordenación Universitaria (accreditation 2016–2019, ED431G/08, and ED431C 2018/2019), and the European Union (European Regional Development Fund, ERDF).
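The labelling step described above, fitting a curve to a scan line and classifying points by their residuals, can be sketched as follows. This is a simplified stand-in that uses a low-order polynomial instead of the paper's iterative spline, and it does not carry knots between scan lines; it only illustrates the residual-thresholding idea:

```python
import numpy as np

def filter_scan_line(x, z, threshold=0.3, iterations=3, degree=3):
    """Iteratively fit a curve to one scan line and label points whose
    residual above the fit exceeds `threshold` as non-ground.
    Returns a boolean mask: True = ground."""
    ground = np.ones_like(z, dtype=bool)
    for _ in range(iterations):
        # Refit using only the points currently labelled as ground
        coeffs = np.polyfit(x[ground], z[ground], degree)
        residuals = z - np.polyval(coeffs, x)
        # Points well above the fitted surface are non-ground (e.g. roofs)
        ground = residuals < threshold
    return ground
```

Refitting after discarding high-residual points is what lets the surface estimate relax back onto the terrain once elevated returns are excluded.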

    A Developer-Friendly “Open Lidar Visualizer and Analyser” for Point Clouds With 3D Stereoscopic View

    Light detection and ranging has become a hot topic in the remote sensing field, and the development of robust point cloud processing methods is essential for the adoption of this technology. In order to understand, evaluate, and showcase these methods, it is key to visualize their outputs. Several visualization tools exist, although it is usually difficult to find one suited to a specific application. On the one hand, proprietary (closed source) projects are not flexible enough, because they cannot be modified to adapt them to particular applications. On the other hand, current open source projects lack an effortless way to create custom visualizations. For these reasons, we present Olivia, a developer-friendly open source visualization tool for point clouds. Olivia provides the backbone for any type of point cloud visualization, and it can be easily extended and tailored to meet the requirements of a specific application. It supports stereoscopic 3-D view, aiding both the evaluation and presentation of processing methods. In this paper, several case studies are presented to demonstrate the usefulness of Olivia along with its computational performance.

    Characterizing zebra crossing zones using LiDAR data

    Light detection and ranging (LiDAR) scanning in urban environments leads to accurate and dense three-dimensional point clouds where the different elements in the scene can be precisely characterized. In this paper, two LiDAR-based algorithms that complement each other are proposed. The first one is a novel profiling method robust to noise and obstacles. It accurately characterizes the curvature, the slope, the height of the sidewalks, obstacles, and defects such as potholes. It was effective for 48 of 49 detected zebra crossings, even in the presence of pedestrians or vehicles in the crossing zone. The second one is a detailed quantitative summary of the state of the zebra crossing. It contains information about the location, the geometry, and the road marking. Coarse-grain statistics are more prone to obstacle-related errors and are only fully reliable for the 18 zebra crossings free from significant obstacles. However, all the anomalous statistics can be analyzed by looking at the associated profiles. The results can help in the maintenance of urban roads. More specifically, they can be used to improve the quality and safety of pedestrian routes. This work was supported by the Consellería de Cultura, Educación e Ordenación Universitaria, Grant/Award Numbers: accreditation 2019-2022 ED431G-2019/04, 2022-2024, ED431C2022/16, ED481A-2020/231; the European Regional Development Fund (ERDF), which acknowledges the CiTIUS-Research Center in Intelligent Technologies of the University of Santiago de Compostela as a Research Center of the Galician University System; the Ministry of Economy and Competitiveness, Government of Spain, Grant/Award Number: PID2019-104834GB-I00; and the National Department of Traffic (DGT) through the project Analysis of Indicators Big-Geodata on Urban Roads for the Dynamic Design of Safe School Roads, Grant/Award Number: SPIP2017-02340.

    CIMAR, NIMAR, and LMMA: novel algorithms for thread and memory migrations in user space on NUMA systems using hardware counters

    This paper introduces two novel algorithms for thread migrations, named CIMAR (Core-aware Interchange and Migration Algorithm with performance Record –IMAR–) and NIMAR (Node-aware IMAR), and a new algorithm for the migration of memory pages, LMMA (Latency-based Memory pages Migration Algorithm), in the context of Non-Uniform Memory Access (NUMA) systems. This kind of system has complex memory hierarchies that present a challenging problem in extracting the best possible performance, where thread and memory mapping play a critical role. The presented algorithms gather and process the information provided by hardware counters to make decisions about the migrations to be performed, trying to find the optimal mapping. They have been implemented as a user-space tool that aims to improve system performance, particularly in, but not restricted to, scenarios where multiple programs with different characteristics are running. This approach has the advantage of not requiring any modification of the target programs or the Linux kernel while keeping a low overhead. Two different benchmark suites have been used to validate our algorithms: the NAS parallel benchmarks, mainly devoted to computational routines, and the LevelDB database benchmark, focused on read–write operations. These benchmarks allow us to illustrate the influence of our proposal on these two important types of codes. Note that these codes are state-of-the-art implementations of the routines, so few improvements could be expected initially. Experiments have been designed and conducted to emulate three different scenarios: a single program running in the system with full resources, an interactive server where multiple programs run concurrently with varying availability of resources, and a queue of tasks where granted resources are limited. The proposed algorithms have been able to produce significant benefits, especially in systems with higher latency penalties for remote accesses. When more than one benchmark is executed simultaneously, performance improvements have been obtained, reducing execution times by up to 60%. In this kind of situation, the behaviour of the system is more critical, and the NUMA topology plays a more relevant role. Even in the worst case, when isolated benchmarks are executed using the whole system, that is, just one task at a time, performance is not degraded. This research work has received financial support from the Ministerio de Ciencia e Innovación, Spain, within the project PID2019-104834GB-I00. It was also funded by the Consellería de Cultura, Educación e Ordenación Universitaria of Xunta de Galicia (accr. 2019–2022, ED431G 2019/04 and reference competitive group 2019–2021, ED431C 2018/19).
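The core decision these tools make, using sampled hardware-counter data to move a thread closer to the memory it touches, can be sketched as a toy policy. This is not the CIMAR/NIMAR logic, only an illustration of counter-driven placement; the data shapes and names are assumptions:

```python
def choose_migrations(access_counts, current_node):
    """access_counts[tid][node] approximates how often thread `tid`
    accessed memory on NUMA node `node` (e.g. sampled via hardware
    counters). Propose moving each thread to the node it accesses
    most, when that differs from where it currently runs."""
    moves = {}
    for tid, counts in access_counts.items():
        best_node = max(counts, key=counts.get)
        if best_node != current_node[tid]:
            moves[tid] = best_node
    return moves
```

A real implementation must also weigh migration cost, core contention on the target node, and latency differences between nodes, which is precisely where the published algorithms differ from this sketch.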